METHOD FOR PROJECTING AN IMAGE BY A PROJECTION SYSTEM OF A MOTOR VEHICLE, AND ASSOCIATED PROJECTION SYSTEM
Abstract:
The invention relates to a method for projecting at least one image by a projection system of a motor vehicle. The method comprises the following steps: - detection (30) of a disturbance zone, - determination of the position of the driver in a predefined reference frame called the projection reference frame, - calculation (46) of an image transformation matrix as a function of the determined driver position, - generation (54) of the control signal, the control signal being obtained by applying said transformation matrix to at least one stored image, - control (56) of the imaging device from the control signal to project a transformed image, the pictogram appearing in a vertical plane (PV) for said driver in the transformed image, - projection (58) of the transformed image.
Publication number: FR3056490A1
Application number: FR1659286
Filing date: 2016-09-29
Publication date: 2018-03-30
Inventors: Xavier Morel; Stephan Sommerschuh; Hafid El Idrissi; Weicheng Luo
Applicant: Valeo Vision SA
IPC classes: B60Q 1/50 (2017.01), B60Q 1/26, F21S 41/00
Description:
Method for projecting an image by a projection system of a motor vehicle, and associated projection system
The present invention is in the field of road safety and automotive lighting.
It sometimes happens that roadways are disturbed by road repair works, by pruning work on the shrubs planted on the verges or on the central reservation, by maintenance of the road markings, or quite simply by surface irregularities such as hollows or potholes. These disturbances are generally signaled by road signs. However, drivers are often surprised by the appearance of these disturbance zones and find it difficult to change lanes quickly, to offset their vehicle relative to their initial trajectory, or simply to adapt their driving speed quickly enough. As a result, accidents are sometimes caused by these disturbance zones on the roads. It is desirable to reinforce the signaling of the disturbance zones with additional signaling.
To this end, the subject of the invention is a method of projecting at least one image by a projection system of a motor vehicle comprising a detection device capable of detecting a disturbance zone and of generating an alert signal
, a processing unit capable of generating a control signal, an imaging device suitable for receiving the control signal and projecting a digital image, and a storage unit storing at least one image representative of a pictogram, characterized in that the method comprises the following steps:
a) detection of a disturbance zone,
b) emission of an alert signal, and on reception of the alert signal:
c) generation of a control signal, said generation step comprising the following substeps:
• determination of the position of the driver in a predefined reference frame called the projection reference frame,
• calculation of an image transformation matrix as a function of the determined position of the driver,
• generation of the control signal, the control signal being obtained by applying said transformation matrix to at least one stored image,
d) control of the imaging device from the control signal to project a transformed image, the pictogram appearing in a vertical plane for said driver in the transformed image, and
e) projection of the transformed image.
Advantageously, the projection method and the projection system according to the invention provide additional signaling which can be easily and quickly removed. This additional signaling is particularly suitable for temporary disturbance zones such as disturbance zones caused by works, by mudslides or fallen stones, by a broken-down vehicle parked on the lane, by car accidents, etc.
According to particular embodiments, the projection method according to the invention includes one or more of the following characteristics:
- the transformation matrix is shaped to rotate the stored image by an angle of 90° relative to a horizontal axis A extending on the roadway, said horizontal axis being perpendicular to the direction of movement of the vehicle;
- the step of generating a control signal further includes a step of adding at least one shadow zone to said transformed image so that the pictogram represented on the transformed image is perceived in relief by said driver;
- the detection device comprises a receiver capable of receiving an electromagnetic signal and a geographic location system of the motor vehicle, and the step of detecting the disturbance zone comprises the following steps:
- determination of the geographical position of the motor vehicle;
- reception of an electromagnetic signal capable of indicating at least one disturbance zone on the road;
- determination of whether the motor vehicle will pass through the disturbance zone;
- generation of an alert signal when the motor vehicle is going to cross the disturbance zone;
- the electromagnetic signal is a signal among a radio signal, a signal from a wireless telecommunications network and a signal from a computer network governed by the communication protocol defined by the standards of the IEEE 802.11 group;
- the electromagnetic signal is a light signal, for example a signal having a wavelength between 400 and 800 nanometers;
- the projection system includes a camera, and the detection step comprises the following steps:
- acquisition of at least one image representative of the roadway by said camera,
- processing of said at least one acquired image to detect the existence of a disturbance zone, and
- generation of an alert signal;
- the pictogram is an image representative of an element among a road stud, a road sign, guide lines or arrows, and site stakes.
- the method further comprises a step of capturing an image of the driver, and the step of determining the position of the driver is implemented from the captured image.
The invention also relates to a system for projecting at least one image of a motor vehicle, said projection system comprising:
- a storage unit capable of storing at least one image representing a pictogram;
- a detection device capable of detecting a disturbance zone, said detection device being adapted to generate an alert signal on detection of the disturbance zone;
- a processing unit connected to the detection device, the processing unit being suitable for calculating a transformation matrix as a function of the position of the driver defined in a predefined reference frame called the projection reference frame, and for generating a control signal from the transformation matrix and the stored image; and
- an imaging device capable of projecting at least one transformed image from a received control signal, the transformed image being intended to appear in a vertical plane for said driver of the motor vehicle.
Alternatively, the projection system includes a light source capable of emitting a light beam towards the imaging device, and a projection device capable of projecting the transformed image onto the roadway.
As a variant, the projection system includes an imager capable of capturing an image of the driver, and the processing unit is able to search for the position of the driver on the captured image and to define the transformation matrix M from the determined position of the driver.
The invention will be better understood on reading the description which follows, given solely by way of example and made with reference to the figures, in which:
- Figure 1 is a schematic view of the projection system according to a first embodiment of the invention;
- Figure 2 is a diagram representing the main steps of the projection method according to the invention;
- Figure 3 is a diagram representing the steps of a first embodiment of the detection step of the projection method according to the invention;
- Figure 4 is a side view of a vehicle equipped with a projection system according to the invention;
- Figure 5 is a perspective view of road studs which can be imaged by the projection method according to the present invention;
- Figure 6 is a diagram representing the steps of a second embodiment of the detection step of the projection method according to the invention; and
- Figures 7 to 23 are figures from the patent application PCT/EP2016/071596 filed on September 13, 2016.
The projection method according to the present invention is implemented by a projection system 2 shown diagrammatically in Figure 1. This projection system 2 comprises a device 4 for detecting the presence of a disturbance zone, a storage unit 6 suitable for storing images to be projected as well as the coordinates of the position of the driver, and a processing unit 10 connected to the detection device 4, to the storage unit 6 and to an imager 8.
The detection device can be produced according to two different embodiments. According to a first embodiment of the invention, the detection device 4 comprises a receiver 3 suitable for receiving an electromagnetic signal emitted by a remote terminal outside the vehicle, and a system 5 for geographic location of the vehicle. The receiver 3 can, for example, be constituted by a car radio.
In this case, the electromagnetic signal is a radio signal, for example a "TMC" signal (acronym of "Traffic Message Channel"). TMC is a European standard for broadcasting traffic information to motorists, usually via RDS (acronym of "Radio Data System") in the "FM" radio frequency band. The remote terminal is then the radio transmitting device.
The receiver 3 can also be constituted by a mobile telephone type receiver. In this case, the electromagnetic signal is a signal from the mobile telephone network, for example a "GSM" signal (acronym of "Global System for Mobile communication"), a "GPRS" signal (acronym of "General Packet Radio Service"), or a "UMTS" signal (acronym of "Universal Mobile Telecommunication System"). The wireless telecommunications network is, for example, defined by the "3G" or "4G" standard.
The receiver 3 can, moreover, be constituted by a computer or tablet type receiver. In this case, the electromagnetic signal is a signal from a computer network governed by a communication protocol of the type defined by the IEEE 802.11 standard, for example the "Wi-Fi" signal (acronym of "Wireless Fidelity").
Finally, the receiver 3 can be constituted by a camera. The electromagnetic signal is then a light signal of the "VLC" type (acronym of "Visible Light Communication"). This signal has a wavelength between 400 and 700 nanometers. It can be emitted by traffic lights or by street lighting infrastructure such as street lamps.
The location system 5 is suitable for determining the geographical position of the vehicle. It is, for example, constituted by a "GPS" device (acronym of "Global Positioning System").
According to a second embodiment of the invention, not shown, the detection device 4 comprises a camera capable of imaging the roadway and a processor connected to the camera and suitable for analyzing the images captured by the camera to determine the presence of an asperity on the roadway, such as for example the presence of a hole in the roadway.
The storage unit 6 is a ROM, UVPROM, PROM, EPROM or EEPROM type memory. It can store images, each representative of a pictogram. A pictogram is a graphic sign representative of a situation whose meaning is likely to be understood quickly. The pictogram includes a figurative drawing and/or alphanumeric symbols. The pictograms can, for example, represent one or more road studs, a road sign, guide lines or arrows, or site stakes.
The storage unit 6 is also capable of storing the coordinates of the position of the driver in a predefined reference frame called the projection reference frame Rp. This reference frame Rp is shown in Figure 4. In particular, the storage unit 6 stores the coordinates of the position of the driver's eyes in this reference frame Rp. This position is an average position established from the eye positions of several drivers of different heights and builds.
The processing unit 10 is a processor type calculation unit.
The projection system 2 according to the invention further comprises a light source 16 capable of emitting a light beam, an imaging device 18 capable of forming a digital image from the light beam coming from the light source 16 and from the control signal, and a projection device 20 shaped to project the image onto the road.
The light source 16 is, for example, constituted by a light-emitting diode and a collimator. As a variant, the light-emitting diode is replaced by a laser source.
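By way of illustration of the storage unit 6 described above, the sketch below shows one possible organization of its contents: stored pictogram images indexed by disturbance type, plus the average eye position of the driver in the projection reference frame Rp. All names and values here are illustrative assumptions, not elements defined by the application.

```python
# Hypothetical sketch of the contents of a storage unit such as
# unit 6: pictogram images indexed by disturbance type, and the
# average driver eye position in the projection reference frame Rp.
# Names and values are illustrative assumptions.

PICTOGRAMS = {
    "works": "road_studs.png",        # alignment of road studs (cf. Figure 5)
    "pothole": "warning_sign.png",
    "shoulder_closed": "guide_arrows.png",
}

# Average eye position (in meters) in the reference frame Rp,
# established offline from drivers of different heights and builds.
REF_EYE_POSITION_RP = (1.5, -0.5, 1.0)

def select_pictogram(disturbance_type: str) -> str:
    """Return the stored image matching the alert message content."""
    return PICTOGRAMS.get(disturbance_type, "warning_sign.png")
```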
The imaging device 18 is, for example, constituted by a matrix of micromirrors. The micromirror array is generally known by the acronym DMD (from the English "Digital Micro-mirror Device"). It includes a large number of micromirrors distributed in rows and columns. Each micromirror is suitable for receiving a part of the light beam emitted by the light source 16 and for reflecting it either towards the projection device 20 or towards a light absorber. Together, the micromirrors are suitable for projecting a digital image.
The projection device 20 generally comprises an input lens and an output lens. These lenses are made of plastic and/or glass. The output lens is, for example, a converging lens.
According to a variant, not shown, the storage unit 6 does not include the coordinates of the position of the driver in the projection reference frame Rp. According to this variant, the projection system comprises an imager connected to the processing unit. The imager is capable of imaging the driver of the motor vehicle. The imager is, for example, constituted by a camera or a still camera. The camera of an anti-drowsiness device (a so-called "driver monitoring" camera) could be used. The processing unit is able to find the position of the driver on the captured image by image processing. This image processing is carried out, for example, using edge detection. In particular, the processing unit searches for the position of the driver's eyes on the captured image. Then, the position of the driver's eyes is defined in a reference frame tied to the projection device, namely the projection reference frame Rp.
Referring to Figure 2, the projection method according to the present invention begins with a step 30 of detecting a disturbance zone. This detection step 30 can be carried out according to two embodiments.
According to the first embodiment of the invention, illustrated in Figure 3, the detection step includes a step 32 of determining the geographic position of the motor vehicle. This geographic position is defined on a road map such as those generally used in GPS devices. During a step 34, the detection device 4 receives an electromagnetic signal suitable for indicating a geographical zone of disturbance on the roadway, as well as information relating to the type of disturbance. During a step 36, the detection device determines whether the vehicle will pass through the disturbance zone 38 detected by the detection device 4. If this is the case, the detection device 4 generates an alert signal during a step 40. During a step 42, the detection device 4 sends the alert signal to the processing unit 10. The alert message contains information on the type of disturbance and on its precise location on the road, for example a disturbance only on the shoulder, on the central reservation, or on the right lane.
On receipt of this alert signal, the position of the driver is determined in the projection reference frame Rp during a step 44. This determination is carried out by searching for the coordinates in the storage unit 6. In particular, the coordinates of the position of the driver's eyes are sought.
During a step 46, the processing unit 10 calculates a transformation matrix M as a function of the position of the driver's eyes in the projection reference frame Rp. This transformation matrix M is shaped so as to generate a transformed image. The transformed image appears to the driver as if it extended in a vertical plane PV, as shown in Figure 4.
The transformation matrix M is shaped to rotate the stored image by an angle of 90° relative to a horizontal axis A extending on the roadway perpendicular to the direction of movement of the vehicle. Thus, the driver of the vehicle does not have the impression of visualizing an image lying flat on the roadway in the zone ZP shown in bold in Figure 4; instead, he has the impression of visualizing an image which would extend vertically over the zone referenced I in Figure 4. In reality, the image is indeed projected onto the zone ZP of the roadway.
One way to calculate this transformation matrix was the subject of a previous patent application filed on September 13, 2016 under number PCT/EP2016/071596. This previous patent application has not yet been published. It is reproduced at the end of the description of the present patent application to give an example of implementation of the present invention.
During a step 48, the processing unit 10 selects an image representative of a particular pictogram from all of the images stored in the storage unit, as a function of the information contained in the alert message transmitted by the detection device 4. The selected image is transmitted from the storage unit 6 to the processing unit 10. Thus, when the disturbance relates to works along the edges of the roadway, the processing unit 10 selects an image representative of an alignment of road studs, as shown in Figure 5.
During a step 50, the transformation matrix M is applied to the image coming from the storage unit. This application is carried out by multiplying the transformation matrix by the stored image. This multiplication distorts the stored image to give a transformed image. The transformed image gives the visual impression of a real pictogram-shaped object placed on the roadway and extending vertically.
During a step 52, the processing unit 10 adds shaded zones to the transformed image to give the driver the visual impression that the pictogram represented on the transformed image is in relief. This addition of shadow is achieved by known image processing techniques. Figure 5 thus illustrates an example of the visual impression perceived by the driver of the motor vehicle of three road studs bordering the lower side of the roadway. This visual impression is obtained by implementing the projection method according to the present invention.
During a step 54, the processing unit generates a control signal representative of the transformed image. During a step 56, the control signal is transmitted to the imaging device 18, which forms the transformed image and directs it towards the projection device 20. During a step 58, the projection device 20 projects the transformed image onto the roadway. The projected pictogram appears in a vertical plane PV for the driver. Other observers, such as vehicle passengers or people outside the vehicle, view a distorted image; they do not necessarily perceive the pictogram.
The detection step 30 can also be implemented by the detection device according to the second embodiment of the invention. In this case, the detection step comprises, with reference to Figure 6, a step 60 of acquiring at least one image representative of the roadway by a camera fixed to the front of the vehicle. The acquired image is transmitted to the detection device 4.
During a step 62, the detection device processes this image to determine whether there is an asperity on the ground, of the pothole type. If an asperity is present, the detection device 4 detects the existence of a disturbance zone and generates an alert signal during a step 64. The alert signal contains information on the type of disturbance. The alert signal is transmitted to the processing unit 10, and the steps of the projection method continue in the same manner as described above for the projection method according to the first embodiment of the invention.
According to a variant of step 44, not shown, the processing unit 10 controls an imager so that it acquires an image of the driver seated in the motor vehicle. The captured image or images are transmitted to the processing unit 10. Then, the position of the driver, and in particular the position of the driver's eyes, is sought on the captured image by image processing. This image processing is carried out by the processing unit 10, for example using contour detection. Then, the position of the driver's eyes is defined in a reference frame tied to the projection device. This reference frame is called the projection reference frame Rp. It is shown in Figure 4.
According to a particularly advantageous variant, the contrast profile of the projected pictogram is reinforced with respect to the average light environment of the background beam on which, or in which, the pictogram is included. To this end, the borders of the pictogram, from the outside towards the inside and in at least one dimension (width or height) of the projection plane of the pictogram, alternate between at least two zones of intensity differing from the average intensity of the background beam: a first zone of higher or lower intensity than this average intensity, and a second zone of respectively lower or higher intensity than this average intensity. In an alternative embodiment, the second zone constitutes the heart or central zone of the pictogram and is then bordered, at least in one dimension, by the first zone. The perception by the driver or by third parties of the message conveyed by the projected pictogram is thus reinforced, the reaction time to the projected message is reduced, and driving safety is thereby improved.
The intensity gradient and the intensity level applied may be constant or may vary along the pattern in a direction of the projection dimension considered (width or height, for example from left to right or from bottom to top respectively, corresponding to a projection from the near field of the vehicle towards the horizon). In addition, this variation can be static or dynamic, that is to say controlled according to the environment of the vehicle: for example, depending on the imminence of an event, the contrast can be dynamically reduced or increased so as to generate a ripple effect of the pattern, which will appear more or less clearly in the background beam and will draw the attention of the driver or of third parties to the imminence of the event corresponding to the projected pictogram (exit or turn arrow, collision alert, pedestrian crossing, etc.). This further improves driving safety.
Patent application PCT/EP2016/071596 is reproduced below. Patent application PCT/EP2016/071596 and its various applications will be better understood on reading the description which follows and on examining the figures which accompany it.
- Figure 7 shows a flow diagram of the steps of the method of projecting at least one image onto a projection surface according to a nonlimiting embodiment of patent application PCT/EP2016/071596;
- Figure 8 shows a motor vehicle comprising a lighting device suitable for implementing the projection method of Figure 7 according to a nonlimiting embodiment of patent application PCT/EP2016/071596;
- Figure 9 shows a light intensity map established according to a step of the projection method of Figure 7 according to a nonlimiting embodiment of patent application PCT/EP2016/071596;
- Figure 10 shows a projector integrating a light module, together with the direction of a light beam of said light module, said light module being adapted to perform at least one step of the projection method of Figure 7;
- Figure 11 shows a flowchart illustrating the sub-steps of a step of establishing a luminance map of the projection method of Figure 7 according to a nonlimiting embodiment of patent application PCT/EP2016/071596;
- Figure 12 shows the projector of Figure 10 and a point of impact of the light beam on the ground;
- Figure 13 shows the projector of Figure 12 and the illumination of the point of impact;
- Figure 14 indicates the site angle and the azimuth angle taken into account in a step of calculating the observation position of an observer of the projection method of Figure 7;
- Figure 15 schematically shows an impact point, an observation position of an observer outside the motor vehicle in an image reference frame, and an image to be projected by the projection method of Figure 7;
- Figure 16 illustrates an image projected according to the projection method of Figure 7, an image which is seen from the point of view of the driver of said motor vehicle but which is only understandable by an observer outside the motor vehicle;
- Figure 17 illustrates an image projected according to the projection method of Figure 7, an image which is seen from the point of view of a rear passenger of said motor vehicle but which is only understandable by an observer outside the motor vehicle;
- Figure 18 illustrates an image projected according to the projection method of Figure 7, an image which is seen from the point of view of said observer outside the motor vehicle and which is understandable by said observer outside the motor vehicle;
- Figure 19 shows a flowchart illustrating the sub-steps of a step of defining the coordinates of a projection of a luminance point of the projection method of Figure 7 according to a nonlimiting embodiment of patent application PCT/EP2016/071596;
- Figure 20 schematically shows the point of impact, the observation position of the observer outside the motor vehicle and the image to be projected of Figure 15 by the projection method of Figure 7, and the coordinates of the intersection between the point of impact and the image to be projected;
- Figure 21 schematically shows the point of impact, the observation position of the observer outside the motor vehicle and the image to be projected of Figure 20, normalized;
- Figure 22 schematically represents pixels of the image to be projected of Figure 20; and
- Figure 23 illustrates a lighting device suitable for implementing the projection method of Figure 7.
DESCRIPTION OF MODES FOR CARRYING OUT THE PATENT APPLICATION PCT/EP2016/071596
Identical elements, by structure or by function, appearing in different figures keep, unless otherwise specified, the same references.
The method MTH for projecting, for a motor vehicle, at least one image onto a projection surface by means of a light module ML according to patent application PCT/EP2016/071596 is described with reference to Figures 7 to 23. By motor vehicle is meant any type of motorized vehicle.
As illustrated in Figure 7, the method MTH comprises the steps of:
- detecting an observation position PosO1 of an observer O in a light module reference frame RP (illustrated step DET_POS(O, PosO1, RP));
- calculating the observation position PosO2 of the observer O in an image reference frame RI (illustrated step DET_POS(O, PosO2, RI));
- projecting said image Ip onto said projection surface S as a function of said observation position PosO2 of the observer O in said image reference frame RI, said image Ip being integrated into said light beam Fx of the light module ML (illustrated step PROJ(Fx, Ip, S)).
As illustrated in Figure 7, the projection of said image Ip comprises the sub-steps of:
- 3a) from a light intensity map CLUX of the light module ML comprising a plurality of intensity indicators pf, calculating a luminance map CLUM on the projection surface S resulting in luminance points pl (illustrated step CALC_CLUM(CLUX, S, pl));
- 3b) calculating the position PosL2 of each luminance point pl in the image reference frame RI (illustrated step CALC_POS(pl, PosL2, O, RI));
- 3c) from its position PosL2 and from the observation position PosO2 of the observer O in said image reference frame RI, defining the coordinates ply, plz of the projection plr of each luminance point pl in the image plane P1 of said image to be projected Ip (illustrated step DEF_PLR(plr, P1, PosL2, PosO2));
- 3d) if said projection plr belongs to said image to be projected Ip, defining the coordinates lig, col of the corresponding pixel Pix (illustrated step DEF_PIX(pl(lig, col), ply, plz));
- 3e) for each projection plr of a luminance point pl belonging to said image to be projected Ip, correcting the intensity value Vi of the corresponding intensity indicator pf as a function of the color Co of the corresponding pixel Pix (illustrated step MOD_PF(pf, Vi, Pix, Co)).
Note that step 3a) in particular, as well as step 3b), can be carried out prior to the iterations of the following steps. More generally, the steps described are not necessarily carried out sequentially, i.e. in the same iteration loop, but can be the subject of different iterations, with different iteration frequencies.
The step of projecting the image Ip further comprises a sub-step 3f) of projecting onto the projection surface S the light beam Fx with the intensity values Vi corrected for the intensity indicators pf (step illustrated in Figure 7, PROJ(ML, Fx, Vi, pf)).
The projection method MTH is suitable for projecting one or more images Ip at the same time. In the following description, the projection of a single image is taken as a nonlimiting example. Note that the projection can be done at the front of the motor vehicle V, at the rear or on its sides.
The light module ML makes it possible to produce a light beam Fx, said light beam Fx comprising a plurality of light rays Rx which follow different directions. The light module ML makes it possible to modify the intensity value Vi of each intensity indicator pf; it is therefore a digital light module. As described below, the image to be projected Ip is thus integrated into the light beam Fx of the light module ML. Note that the light intensity map CLUX is discretized so that it can be used digitally.
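Viewed as an algorithm, steps 3a) to 3f) chain together as in the structural sketch below. The helper functions stand in for the formulas detailed in the sub-steps that follow; every name here is an illustrative assumption, not an API defined by the application.

```python
# Structural sketch of steps 3a)-3f) of the method MTH. The helpers
# (luminance_point, central_projection, ...) stand in for the formulas
# detailed in the sub-steps below and are left abstract here.

def project_image_mth(indicators, M, PosO1, image, sigma):
    """indicators: intensity indicators pf of the map CLUX, each
    carrying an initial intensity Vi0; M: frame-change matrix from
    RP to RI; PosO1: observer position in frame RP (homogeneous);
    image: pixel array of the image to be projected Ip."""
    PosO2 = M @ PosO1                              # observer in frame RI
    for pf in indicators:
        pl = luminance_point(pf)                   # 3a) impact point, illumination, luminance
        PosL2 = M @ pl.PosL1                       # 3b) luminance point in frame RI
        plr = central_projection(PosO2, PosL2)     # 3c) projection onto image plane P1
        if belongs_to_image(plr):
            lig, col = pixel_coords(plr)           # 3d) corresponding pixel Pix
            Co = image[lig][col]                   #     its color, between 0 and 255
            pf.Vi = sigma * pf.Vi0 * Co / 255.0    # 3e) corrected intensity value
    return indicators                              # 3f) beam Fx projected with corrected Vi
```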
The light module ML is considered to be a point light source from which the space around said light source is discretized. Thus, an intensity indicator pf is a point in the space illuminated by the light module ML which has a given direction dir1 and a given intensity value Vi provided by the light module ML in said direction dir1. The direction dir1 is given by two angles θ and δ (described below).
In a nonlimiting embodiment, the projection surface S is the ground (referenced S1) or a wall (referenced S2). The image Ip that will be projected onto the ground or the wall is thus a 2D image.
In a nonlimiting embodiment illustrated in Figure 8, a lighting device DISP of the motor vehicle V comprises at least one light module ML and is adapted to implement the projection method MTH. In the nonlimiting example illustrated, the lighting device is a projector.
As will be seen below, the observation position of the observer O is taken into account for the projection of the image to be projected Ip. For this purpose, the image to be projected Ip will be distorted so that it can be understood by the observer in question, whether it is the driver, a front or rear passenger of the motor vehicle, or an observer outside the motor vehicle. We thus place ourselves at the point of view of the observer O for whom we want to project the image Ip. From the point of view of this observer, the image Ip will not appear distorted; from any other point of view, it will appear distorted.
In nonlimiting exemplary embodiments, an observer O outside the vehicle is a pedestrian, a driver of another motor vehicle, a cyclist, a motorcyclist, etc. He can be located at the front, at the rear or on one of the sides of the motor vehicle V.
In a nonlimiting embodiment, the projected image Ip comprises at least one graphic symbol. This graphic symbol makes it possible to improve the comfort and/or the safety of the observer O. In a nonlimiting example, if the observer O is the driver of the motor vehicle, the graphic symbol may represent the maximum speed not to be exceeded on the road, a graphic STOP symbol when the motor vehicle is reversing and an obstacle (pedestrian, wall, etc.) is too close to the motor vehicle, or an arrow to assist him when the motor vehicle is about to turn onto a road, etc.
In a nonlimiting example, if the observer O is outside the motor vehicle, such as a pedestrian or a cyclist, the graphic symbol may be a STOP signal to indicate to him that he must not cross in front of the motor vehicle because the latter is about to move off again. In a nonlimiting example, if the observer O is outside the motor vehicle and is the driver of a following motor vehicle, the graphic symbol may be a STOP signal when the motor vehicle in question brakes, so that the driver of the following vehicle brakes in turn. In another nonlimiting example, if the observer O is outside the motor vehicle and is in a motor vehicle which is pulling out to the side, the graphic symbol may be a warning symbol to indicate to said motor vehicle that it should pull back in because another motor vehicle is arriving in the opposite direction.
As shown in Figure 8, the projected image Ip is a STOP symbol. It is oriented on the projection surface S, here the ground in the nonlimiting example illustrated, so that the observer O can see and understand this STOP symbol. In the nonlimiting example illustrated, the projection is made in front of the motor vehicle V and the observer O is outside the motor vehicle V.
The different stages of the projection method MTH are described in detail below.
• 1) Detection of the observation position of the observer in the light module reference frame RP
To detect the observation position PosO1 of the observer O in the light module reference frame RP, it is necessary to detect the position of the observer O himself in the light module reference frame RP. For this purpose, in a nonlimiting example, a camera (not shown) is used. It is suitable for detecting and calculating the position of an observer O who is outside the motor vehicle V. In nonlimiting embodiments, the camera is replaced by a radar or a lidar.
For an observer O who is inside the motor vehicle (driver or passengers), reference observation positions are considered, for example. Thus, in a nonlimiting example, it is considered that the driver's eye is at the position PosO1 = (1.5; -0.5; 1) (expressed in meters) relative to the light module ML, in the case of a motor vehicle which is a car. Of course, if the motor vehicle is a truck, the position of the eye relative to the light module ML is different.
For an outside observer, his observation position PosO1, which corresponds to the position of his eye, can be deduced from the position of said observer O. For example, his eye is located approximately 1.5 meters above the ground.
Since such detection of the position of the observer is known to a person skilled in the art, it is not described in detail here.
• 2) Calculation of the observation position of the observer in the image reference frame RI
The observation position PosO1 of the observer O was previously determined in the light module reference frame RP. It will be used for the change of reference frame described below.
This step changes the coordinate system: we go from the light module reference frame RP (defined by the axes pjx, pjy, pjz) to the image reference frame RI (defined by the axes Ix, Iy, Iz) of the image to be projected Ip.
The calculation of the observation position PosO2 of the observer O in the image reference frame RI is based on at least one transformation matrix M from the light module reference frame RP to said image reference frame RI. In a nonlimiting embodiment, the position PosO2 is a position vector in homogeneous coordinates, of the form (pos.x; pos.y; pos.z; 1).
In a nonlimiting embodiment, said at least one transformation matrix M takes into account at least one of the following parameters:
- the position PosIp of the image to be projected Ip in the light module reference frame RP;
- the rotation RotIp of the image to be projected Ip in the light module reference frame RP;
- the scale of the image to be projected Ip.
The position PosIp of the image to be projected Ip is deduced from the light module reference frame RP according to a translation along the three axes pjx, pjy, pjz of said light module reference frame RP.
In a nonlimiting embodiment, the transformation matrix M is of the form:
M = [ a b c t ]
    [ d e f u ]
    [ g h i v ]
    [ 0 0 0 1 ]
where a, e and i are the affinity terms; b, c, d, f, g and h the rotation terms; and t, u and v the translation terms.
The affinity terms a, e and i make it possible to enlarge or shrink the image Ip; for example, the total size (homothety) is increased by 50%, or reduced by 20%, by increasing by 50%, respectively reducing by 20%, the values of a, e and i. By way of example, a value of a, e and i equal to 1 corresponds to a predetermined reference dimension of the projected image, respectively along the directions pjx, pjy and pjz.
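A minimal numerical sketch of this change of reference frame in homogeneous coordinates is given below (numpy is assumed, and the matrix values are arbitrary illustrations):

```python
import numpy as np

# Frame-change matrix M from the light module frame RP to the image
# frame RI: a, e, i on the diagonal are the affinity (scale) terms,
# b, c, d, f, g, h the rotation terms, t, u, v the translation terms.
# The numbers below are arbitrary illustrations.
M = np.array([
    [1.0, 0.0, 0.0,  2.0],   # a b c t
    [0.0, 1.0, 0.0,  0.5],   # d e f u
    [0.0, 0.0, 1.0, -1.0],   # g h i v
    [0.0, 0.0, 0.0,  1.0],
])

# Driver's eye in the light module frame RP (meters), homogeneous form.
PosO1 = np.array([1.5, -0.5, 1.0, 1.0])

PosO2 = M @ PosO1        # observation position in the image frame RI
print(PosO2[:3])         # -> [3.5 0.  0. ]
```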
It is also possible to apply the enlargement or shrinking factors along only one of the dimensions, or along two of them (non-homothetic scaling). Different enlargement or shrinking factors can also be applied to certain dimensions compared to others; in particular, a different factor can be applied to each dimension. In this way, depending on the position PosO2 of the eye of the observer O, it is possible to project an image which appears to the observer O larger or smaller, overall or along some of the dimensions, depending on whether the values of a, e and i are increased or decreased.
Note that the rotation RotIp depends on three angles, which are as follows:
- β: azimuth (which indicates whether the image to be projected is located to the right or to the left of the observer, for example when the latter looks to the right or to the left);
- Ω: tilt (which indicates the inclination of the image to be projected Ip, for example when the observer tilts his head to the side; this amounts to tilting the image Ip);
- ε: site (which indicates the effect that one wants to give to the graphic symbol of the image Ip).
Figure 14 illustrates the site and azimuth angles and the plane P1 of the image to be projected Ip.
We thus have PosO2 = M * PosO1, where PosO1 is the observation position of the observer O used for the projection of the image Ip in the light module reference frame RP, and PosO2 is the observation position of the observer O used for the projection of the image Ip in the image reference frame RI. Thus, the position and rotation of the image to be projected Ip are adapted as a function of the observer O; in this way, the image to be projected Ip will be understandable by the observer O. This gives an affine deformation of the image from the desired point of view, called anamorphosis.
Thus, for the eye of a car driver, the projected image Ip is not distorted. Similarly, for the eye of a truck driver, even though it is positioned well above the light module reference frame RP, the projected image Ip is also not distorted. Finally, for an outside observer, the projected image Ip is also not distorted. Note that the projected image Ip can thus be clearly visible to the observer, since its projection depends on the observation position of the observer O and its scale can be modulated as desired. Thus, even if he is far from the motor vehicle, the observer O will still be able to understand and see the graphic symbol(s) of the projected image Ip.
• 3) Projection of the image Ip onto the projection surface
This step includes the following sub-steps:
• 3a) Calculation of a luminance map CLUM
In a nonlimiting embodiment, the light intensity map CLUX is stored in a memory. It will have previously been established during product design, using a goniophotometer (not shown). The goniophotometer is, for example, of type A, that is to say one in which the rotational movement around the horizontal axis supports the rotational movement around the vertical axis.
The light intensity map CLUX gives the intensity indicators pf of the light module ML, considered as a point light source. The direction dir1 of a light ray Rx starting from the light module ML is expressed as a function of the two angles θ and δ and is given by the following formula:
dir1 = (cosθ * cosδ ; sinθ ; cosθ * sinδ),
with δ the vertical rotation V of the goniophotometer, and θ the horizontal rotation H of the goniophotometer.
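As a small cross-check of this direction formula, the helper below builds the unit vector for given goniophotometer angles; the axis layout (x towards the wall S2, z vertical) is an assumption consistent with the impact-point and illumination formulas of the sub-steps that follow:

```python
import numpy as np

# Direction dir1 of a light ray Rx for goniophotometer angles
# theta (horizontal rotation H) and delta (vertical rotation V),
# in radians. Axis layout assumed: x forward (towards the wall S2),
# y lateral, z vertical (negative z for rays towards the ground S1).
def dir1(theta: float, delta: float) -> np.ndarray:
    return np.array([
        np.cos(theta) * np.cos(delta),   # dir1.x
        np.sin(theta),                   # dir1.y
        np.cos(theta) * np.sin(delta),   # dir1.z
    ])

# A ray aimed 10 degrees below the horizon, straight ahead:
d = dir1(0.0, np.radians(-10.0))
print(np.linalg.norm(d))   # ~1.0: dir1 is a unit vector
```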
The light intensity map CLUX thus comprises a plurality of intensity indicators pf whose direction dir1 is given by the above formula, with θ the horizontal angle of the intensity indicator pf and δ the vertical angle of the intensity indicator pf. The light intensity map CLUX is shown in Figure 9; an intensity indicator pf of polar coordinates δ = 0 (V), θ = 0 (H) can be seen there. The light intensity map CLUX thus makes it possible to determine an intensity I(θ, δ) for a given direction. We thus have:
CLUX = {(δi, θj, Ii,j), (i, j) ∈ [1, M] x [1, N]},
where M and N are the numbers of discretization points (or intensity indicators) of the light beam Fx in the vertical and horizontal directions respectively. An intensity indicator pf is therefore defined by its direction dir1 and its intensity I(θ, δ).
Figure 10 illustrates a lighting device DISP comprising a light module ML, with the direction of a light ray Rx.
The calculation of the luminance map CLUM on the projection surface S comprises the following sub-steps, illustrated in Figure 11:
- i) a first calculation of the position POSpf of said intensity indicators pf on the projection surface S, resulting in impact points pi (illustrated step CALC_POSF(pf, POSpf, pi));
- ii) a second calculation of an illumination map CECL of said impact points pi (illustrated step CALC_CECL(pi, CECL));
- iii) a third calculation of the luminance map CLUM of said impact points pi from the illumination map CECL, resulting in said luminance points pl (illustrated step CALC_CLUM(pi, CECL)).
The different sub-steps are detailed below. It will be noted that the calculations below are carried out as a function of the projection surface S (ground S1 or wall S2).
o sub-step i)
The first calculation is based on:
- the position POSpj of the light module ML in the Cartesian coordinate system x, y, z; and
- the direction dir1 of said intensity indicators pf, described above.
For the ground S1, the position POSpf1 of the intensity indicator pf on the ground in the Cartesian coordinate system x, y, z is obtained with the following formula:
POSpf1 = POSpj - (POSpj.z / dir1.z) * dir1,
with POSpj.z the z value of the position of the light module ML (height of the light module above the ground) and dir1.z the z value of the direction vector of the light ray Rx.
For the wall S2, the position POSpf2 of the intensity indicator pf on the wall in the Cartesian coordinate system x, y, z is obtained with the following formula:
POSpf2 = POSpj - (D / dir1.x) * dir1,
with dir1.x the x value of the direction vector of the light ray Rx, and D the distance between the light module ML and the wall. In a nonlimiting example, D is equal to 25 meters.
This gives an impact point pi (at position POSpf1 or POSpf2) on the ground S1 or on the wall S2. Figure 12 illustrates a nonlimiting example of an impact point pi on a projection surface S which is the ground S1.
o sub-step ii)
Once the impact point pi on the ground S1 or on the wall S2 has been determined, the illumination E of this impact point pi is calculated from the intensity I(θ, δ) of the intensity indicator pf previously determined.
For the ground S1, the illumination Er of the impact point pi on the ground is obtained with the following formula:
Er = -(I(θ, δ) / dist1²) * cosθ * sinδ,
with dist1 the distance between the impact point pi and the light module ML.
For the wall S2, the illumination Em of the impact point pi on the wall is obtained with the following formula:
Em = (I(θ, δ) / dist1²) * cosθ * cosδ,
with dist1 the distance between the impact point pi and the light module ML. Figure 13 illustrates the illumination E (delimited by a dotted circle) of an impact point pi on a projection surface S which is the ground S1.
o sub-step iii)
The third calculation is based on:
- the illumination E of said impact points pi;
- a position/eye vector Moeil between the position of an impact point pi of the illumination map CECL and the observation position PosO1 of the observer O (in the light module reference frame RP); and
- a light scattering function d.
d is a known function which makes it possible to calculate the scattering of light by the projection surface S. It will be noted that it varies as a function of the nature of the projection surface S; for example, the function d is different if the surface is asphalt, concrete, tar, paving stones, etc.
For the ground S1, the luminance Lr of the impact point pi on the ground is obtained with the following formula:
Lr = (Er * d) / (π * Moeil.z),
with Moeil.z the z value of the normalized vector Moeil.
For the wall S2, the luminance Lm of the impact point pi on the wall is obtained with the following formula:
Lm = (Em * d) / (π * Moeil.x),
with Moeil.x the x value of the normalized vector Moeil.
In a nonlimiting embodiment, it is assumed that the projection surface S emits uniformly in all directions. In this case, the diffusion parameter d does not depend on the angles δ and θ.
In a nonlimiting embodiment, the projection surface S is considered to be a Lambertian diffuser (for example a grey body). We then have a constant luminance on the projection surface S, proportional to the illumination E, and in this case the diffusion function d is a cosine. In this case, Lr = (a/π) * Er and Lm = (a/π) * Em, where a is the albedo of the material. In nonlimiting examples, the albedo of asphalt is 7%, and that of concrete varies between 17% and 27%.
• 3b) Calculation of the positions of the luminance points pl in the image reference frame RI
The position PosL1 of a luminance point pl was previously determined in the light module reference frame RP. It will be used for the change of reference frame described below. In the same way as for the calculation of the observation position PosO2 of the observer O, this step makes a change of reference frame: we go from the light module reference frame RP (defined by the axes pjx, pjy, pjz) to the image reference frame RI (defined by the axes Ix, Iy, Iz) of the image to be projected Ip.
The calculation of the position PosL2 of a luminance point pl in the image reference frame RI is based on said at least one transformation matrix M from the light module reference frame RP to said image reference frame RI (transformation matrix M described above). In a nonlimiting embodiment, the position PosL2 is of the same homogeneous form as the position PosO2 described above. It will be noted that the transformation matrix M was described for the calculation of the observation position PosO2 of the observer O in the image reference frame RI; it is therefore not detailed again here. We thus have PosL2 = M * PosL1, where PosL1 is the position of the luminance point pl in the light module reference frame RP and PosL2 is the position of the luminance point pl in the image reference frame RI.
Figure 15 illustrates the image to be projected Ip as well as the image reference frame RI.
We can also see there the luminance point pl and the eye of the observer O (which corresponds to the observation position), with their respective positions PosL2 and PosO2 defined in the image reference frame RI.
It will be noted that although the image Ip projected on the ground or the wall is in 2D (two dimensions), a 3D effect (three dimensions) can be obtained, that is to say an effect of perspective or trompe-l'oeil, by adjusting the site angle ε seen above. The observer O (whether the driver, a passenger, or an outside observer) will see the image in perspective. For this purpose, the site angle ε is greater than -90°. In particular, it is greater than -90° and less than or equal to 0°. The 3D effect is thus visible between 0° and -90° (not included). Note that at -90° the image Ip lies flat on the ground and therefore has no 3D effect.
Figures 16 to 18 illustrate a projected image Ip which is a pyramid. An observer O who is outside the motor vehicle, such as a pedestrian, is taken as a nonlimiting example. The pyramid is visible from three particular points of view, namely the driver's point of view (Figure 16), the point of view of a rear passenger (Figure 17) and the point of view of the pedestrian (Figure 18), but it is seen with a 3D effect from only one point of view. In the nonlimiting example illustrated, only the pedestrian will see the pyramid in 3D (as illustrated in Figure 18). From the driver's or the passenger's point of view, the pyramid appears distorted.
In a nonlimiting variant, the site angle ε is equal to 0. The observer O looks straight ahead. In this case, the observer O will see the image, namely the pyramid here, as if it were standing upright.
In a nonlimiting variant, the site angle ε is substantially equal to -35°. This allows a 3D effect raised in the direction of the road. The plane P1 of the image Ip is then perpendicular to the direction of observation of the observer O. If the site angle ε is different from -90°, the pyramid will be visible in 3D but more or less inclined.
• 3c) Definition of the coordinates ply, plz of the projection plr of a luminance point pl
As illustrated in Figure 19, in a nonlimiting embodiment, the definition of the coordinates ply, plz of a projection plr of a luminance point pl comprises the sub-steps of:
- i) calculating the point of intersection Int (illustrated sub-step CALC_INT(PosO2, PosL2, P1)) between:
- the line V(PosO2, PosL2) passing through the observation position PosO2 of the observer O in said image reference frame RI and through the position PosL2 of said luminance point pl in said image reference frame RI; and
- the image plane P1 of the image to be projected Ip;
- ii) determining the coordinates ply, plz of said point of intersection Int from the dimensions L1, H1 of said image to be projected Ip (illustrated sub-step DEF_COORD(Int, L1, H1)).
These two sub-steps are described below.
o sub-step i)
In the image reference frame RI, the point of intersection Int between the line (eye, luminance point) and the image plane P1 is the point on the line (eye, luminance point) for which Ix = 0. We thus have:
Int = PosO2 - ((PosO2.x) / (V(PosO2, PosL2).x)) * V(PosO2, PosL2),
with:
- V(PosO2, PosL2) the vector representing the line (eye, luminance point) in the image reference frame RI;
- V(PosO2, PosL2).x the x value of this vector;
- Int the point of intersection between the line (eye, pl) and the image to be projected Ip in the image reference frame RI.
The intersection point Int is thus the projection plr of the luminance point pl on the image plane P1 of the image to be projected Ip;
- PosL2.x is the x value of the position of the luminance point pl;
- PosO2.x is the x value of the observation position of the observer.
Note that the observation position of the observer O is assumed to be placed on the axis Ix.
Figure 20 illustrates the image to be projected Ip, the point of intersection Int, which corresponds to the projection plr of the luminance point pl on said plane P1, and the vector V(PosO2, PosL2) (illustrated in dotted lines). Note that the projection plr is of the central type, so as to produce a conical perspective effect. In what follows, the terms projection plr and central projection plr are used interchangeably.
o sub-step ii)
The coordinates ply, plz of the central projection plr of the luminance point pl in the image reference frame RI correspond to the coordinates, along the Iy (vertical) axis and along the Iz (horizontal) axis, of the position of the intersection point Int determined previously. In a nonlimiting embodiment, they are expressed in meters. The coordinates of this point in the frame of Figure 20 are deduced by the following formulas:
ply = (Int.y + (L1/2)) / L1
plz = Int.z / H1
with:
- L1 the width of the image to be projected Ip (expressed in meters in a nonlimiting example);
- H1 the height of the image to be projected Ip (expressed in meters in a nonlimiting example);
- Int.y the y value of the intersection point;
- Int.z the z value of the intersection point.
Figure 20 illustrates the definition of the ply and plz coordinates, in meters, in the image reference frame RI. Note that L1 and H1 are input parameters of the projection method MTH.
This sub-step makes it possible to determine subsequently whether the coordinates ply, plz belong to the image to be projected Ip (they must then be between 0 and 1) and therefore whether the central projection plr of the luminance point pl belongs to the image to be projected Ip. To this end, in a nonlimiting embodiment, the image to be projected Ip and the coordinates of the projection plr thus calculated are normalized; this simplifies the test of belonging to the image to be projected Ip. This gives a normalized reference frame IX (vertical axis), IY (horizontal axis), as illustrated in Figure 21. The value of the coordinates ply, plz of the projection plr is now between 0 and 1; in the example illustrated, the Iy and Iz axes have become the IX and -IY axes respectively. Image dimensions H2, L2 between 0 and 1 are thus obtained. Figure 21 illustrates the definition of the ply and plz coordinates, as unitless values, in the image reference frame RI.
Note that the size (L1, H1) of the image to be projected Ip can be defined in this step 3c) or in the step involving the transformation matrix M. Since the dimensions L1 and H1, and therefore L2 and H2, as well as the position and the rotation of the image to be projected Ip, are known (these are input parameters of the projection method MTH), it can easily be determined, via its coordinates ply, plz, whether or not the projection plr belongs to the image to be projected Ip.
• 3d) Definition of the coordinates of the corresponding pixel Pix
The definition of the row (lig) and column (col) coordinates of the pixel Pix is carried out for each projection plr (of a luminance point pl) which belongs to the image to be projected Ip, namely which is located inside the rectangle L2 x H2 of the image to be projected Ip, as verified in step 3c-ii).
Thus, if the projection plr belongs to the image to be projected Ip, the coordinates of the corresponding pixel Pix are calculated. They are calculated as follows:
lig = -plz * L2
col = ply * H2
with:
- lig the row of the pixel;
- col the column of the pixel;
- L2 the width of the image to be projected Ip (this time expressed in pixels);
- H2 the height of the image to be projected Ip (this time expressed in pixels);
- ply the coordinate of the projection plr along the axis IX;
- plz the coordinate of the projection plr along the axis IY.
• 3e) Correction of the intensity value of the corresponding intensity indicator pf
With the coordinates lig, col of the pixel Pix, the value of its color Co in the image that one wants to project can be retrieved. In a nonlimiting example, the value is between 0 and 255. It is thus possible to go from white to black through several shades of grey, as illustrated in Figure 22. By the term white must be understood any single color, and by the expression shades of grey must be understood the shades obtained from said single color between its lightest shade and black. Thus the projected image is not necessarily composed of the color white and the shades of grey associated with the Co values between 0 and 255, but of more or less dark shades of any color visible to the human eye. Advantageously, it is white, yellow, blue, red or amber.
The intensity value Vi of the corresponding intensity indicator pf is then corrected. Note that this is possible because the light module ML is digitalized.
In a first nonlimiting embodiment, the correction is carried out as follows:
Vi = σ * Vi0 * Co / 255,
with:
- Vi0 the initial intensity value of the intensity indicator pf of the light module;
- Co the color of the corresponding pixel Pix; and
- σ a maximum over-intensification factor.
In a second nonlimiting embodiment, the correction is carried out as follows:
Vi = φ * Co,
with φ a luminance coefficient. This performs a substitution of the luminances, which makes it possible to display the image on a background independent of the basic light distribution.
This step is carried out for all the luminance points pl whose central projection plr belongs to the rectangle L2 x H2 of the image to be projected Ip.
Thus, the light module ML can project onto the projection surface S the light beam Fx comprising the light rays Rx with the intensity values Vi corrected for the intensity indicators (step 3f, illustrated in Figure 7, PROJ(ML, Fx, pf, Vi)). This displays the right color Co for the intensity indicator considered. In this way, the image to be projected Ip is integrated into the light beam Fx of the light module ML (since it is produced by said light module ML itself) and is projected onto the projection surface S with the right colors.
Thus, as a function of the desired color Co of a pixel Pix, a correction factor is applied to the intensity value Vi of the corresponding intensity indicator pf. Intensity indicators whose color does not depend on the light intensity of the light beam Fx itself can thus be obtained. For example, the projected pyramid illustrated is of uniform color. In the case of a light source independent of the light module ML projecting said pyramid superimposed on said light beam, this would not be the case: the pixels of the image would be more or less illuminated depending on the distribution of the light intensity of said light beam, and their color would thus vary according to the light intensity of said light beam.
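Putting sub-steps 3c) to 3e) together for a single luminance point, a minimal numpy sketch is given below. It follows the formulas above literally, including the sign conventions of Figure 21 (note that the row index lig comes out negative under these conventions; a complete implementation would map them explicitly to array bounds). All numeric values are arbitrary illustrations, and the first intensity-correction variant is used.

```python
import numpy as np

# Sub-steps 3c)-3e) for one luminance point pl, following the text's
# formulas literally. Values are arbitrary illustrations.

PosO2 = np.array([10.0, 0.0, 1.5])   # observer's eye in frame RI
PosL2 = np.array([-2.0, 0.1, 0.0])   # luminance point pl in frame RI
L1, H1 = 2.0, 1.0                    # image size in meters
L2, H2 = 200, 100                    # image size in pixels
image = np.full((H2, L2), 255, dtype=np.uint8)   # stored image (colors Co)
sigma, Vi0 = 1.0, 1000.0             # over-intensification factor, initial intensity

# 3c-i) intersection Int of the line (eye, pl) with the plane Ix = 0
V = PosL2 - PosO2
Int = PosO2 - (PosO2[0] / V[0]) * V

# 3c-ii) normalized coordinates of the central projection plr
ply = (Int[1] + L1 / 2) / L1
plz = Int[2] / H1

if 0.0 <= ply <= 1.0 and 0.0 <= plz <= 1.0:   # plr belongs to Ip
    # 3d) corresponding pixel Pix; lig is negative here, mirroring the
    # -IY axis inversion of Figure 21 (Python's negative indexing
    # happens to accommodate it for this example).
    lig = int(-plz * L2)                      # -> -50
    col = int(ply * H2)                       # -> 54
    # 3e) corrected intensity value of the indicator pf
    Co = float(image[lig, col])               # -> 255.0
    Vi = sigma * Vi0 * Co / 255.0             # -> 1000.0
```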
Furthermore, the fact that the image to be projected Ip is integrated into said light beam Fx, rather than superimposed on it, makes it possible to obtain a better contrast of the image on the projection surface S than when an independent light source is used. In the case of an independent light source, the light beam also illuminates the projected image, which is therefore lighter in color.

It should be noted that the color value Co of a pixel, or of a series of pixels corresponding to predetermined parts of the projected image, can also be used to enhance the 3D effect. For example, with reference to FIG. 12, the pixels corresponding to the face F1 of the pattern of the projected image and those corresponding to the face F2 may have specific and different color values Co. Thus the face F1 appears brighter than the face F2, or vice versa, depending on whether the color value Co of the pixels making up the face F1 is higher or lower than that of the pixels making up the face F2. The color value Co of the pixels making up the face F1 and/or F2 can also vary so as to produce a gradient effect, for example from one edge of the face to the other, making it possible to further enhance the 3D effect.

It is possible to obtain multicolored images by using several systems operating according to the method described above, each emitting a visually different color. The images projected by each system are then calculated so as to be projected onto the projection surface S in a superimposed manner, in order to obtain a global multicolored projected image.

It will be noted that, since the projection of the image to be projected Ip depends on the observation position of the observer O, it is continuously updated as a function of the movement of the observer O relative to the motor vehicle when the observer is outside the motor vehicle, and as a function of the movement of the motor vehicle itself when the observer O is inside it.

In a nonlimiting embodiment, the refresh frequency of the calculations presented above is thus a function of the speed of movement of the observer relative to the motor vehicle, for the case of an outside observer: the higher the speed, the higher the refresh rate; the lower the speed, the lower the refresh rate. In another nonlimiting embodiment, the refresh frequency is constant; in a nonlimiting example, the calculations are refreshed once per second.

Thus, these calculations being performed in real time, it is not necessary to have a database of preloaded images of the same graphic symbol corresponding to every imaginable observation position of the observer relative to the motor vehicle (when outside) or within the motor vehicle (when inside).

The MTH projection method thus makes it possible to project one or more images Ip onto a projection surface S that is not only visible to an observer located inside or outside the motor vehicle but also understandable by him, since the projected image Ip is oriented in the direction of the gaze of said observer O. It will be noted that, in the case where several images Ip are projected at the same time, the combination of the different images with the light beam Fx is calculated before the overall result is projected.

In a nonlimiting embodiment, the MTH projection method is implemented by a lighting device DISP for a motor vehicle V.
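As a purely illustrative reading of the speed-dependent refresh strategy described above (the patent specifies no particular mapping; the function name, thresholds and coefficients below are hypothetical):

```python
# Illustrative sketch: choose a refresh rate for the projection
# calculations from the observer's speed relative to the vehicle.
# The base rate and gain below are assumptions, not from the patent.

def refresh_rate_hz(relative_speed_mps: float) -> float:
    """Return an update rate that grows with the relative speed of the
    outside observer, as the method's description suggests."""
    base_hz = 1.0          # constant-rate variant: ~one refresh per second
    gain_hz_per_mps = 2.0  # hypothetical proportionality factor
    return base_hz + gain_hz_per_mps * max(0.0, relative_speed_mps)

# Example: a pedestrian moving at 1.5 m/s relative to the vehicle.
print(refresh_rate_hz(1.5))  # 4.0 Hz under these assumed parameters
```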
In a nonlimiting embodiment, the lighting device DISP allows the realization of a regulatory photometric function such as a low beam, a high beam or a front, rear and/or side signaling function. Thus, the lighting device is located at the front of the motor vehicle or at the rear.

The lighting device DISP is illustrated in FIG. 17. It comprises a processing unit PR and at least one light module ML. In nonlimiting embodiments, the lighting device is a headlamp or a rear light.

The processing unit PR is suitable for:
- detecting an observation position PosO1 of an observer O in a light module reference frame RP (illustrated function DET_POS(O, PosO1, RP));
- calculating the observation position PosO2 of the eye of the observer O in an image reference frame RI (illustrated function DET_POS(O, PosO2, RI)).

Said lighting device DISP is adapted to project said image Ip onto said projection surface S as a function of said observation position PosO2 of the observer O in the image reference frame RI, said image Ip being integrated into said light beam Fx of the light module ML (illustrated function PROJ(Fx, Ip, S)).

For the projection of said image Ip onto said projection surface S, the processing unit PR is further adapted to:
- from a light intensity map CLUX of the light module ML comprising a plurality of intensity indicators pf, calculate a luminance map CLUM on the projection surface S resulting in luminance points pl (illustrated function CALC_CLUM(CLUX, S, pl));
- calculate the position PosL2 of each luminance point pl in the image reference frame RI (illustrated function CALC_POS(pl, PosL2, O, RI));
- from its position PosL2 and from the observation position PosO2 of the observer O in said image reference frame RI, define the coordinates ply, plz of the projection plr of each luminance point pl on the image plane P1 of said image to be projected Ip (illustrated function DEF_PLR(plr, P1, PosL2, PosO2));
- if said projection plr belongs to said image to be projected Ip, define the coordinates lig, col of the corresponding pixel Pix (illustrated function DEF_PIX(pl(lig, col), ply, plz));
- for each projection plr of a luminance point pl belonging to said image to be projected Ip, correct the intensity value Vi of the corresponding intensity indicator pf as a function of the color Co of the corresponding pixel Pix (illustrated function MOD_PF(pf, Vi, Pix, Co)).

For the projection of said image Ip onto the projection surface S, the light module ML is adapted to project onto the projection surface S the light beam Fx with the corrected intensity values Vi of the intensity indicators pf (illustrated function PROJ(ML, Fx, Vi, pf)).

It will be noted that the processing unit PR is either integrated into the light module ML or independent of said light module ML.

Of course, the description of the patent application PCT/EP2016/071596 filed on September 13, 2016 is not limited to the embodiments described above. Thus, in another nonlimiting embodiment, a type B goniophotometer can also be used, that is to say one in which the rotational movement around the vertical axis supports the rotational movement around the horizontal axis. Thus, in another nonlimiting embodiment, the processing unit PR can be offset relative to the lighting device DISP. Thus, the step of calculating the observation position PosO2 in the image reference frame RI can be carried out before or at the same time as the calculation of the luminance position PosL2.
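Read together, the adaptations of the processing unit PR enumerated above form a processing pipeline. The following self-contained sketch mirrors that control flow under the illustrated function names; every body is a hypothetical placeholder (the geometry is reduced to trivial coordinate copies), not Valeo's implementation:

```python
# Illustrative, self-contained skeleton of the PR processing chain.
# Only the control flow mirrors the illustrated functions of DISP.

from dataclasses import dataclass

@dataclass
class LuminancePoint:
    pos: tuple    # position of pl on the projection surface S (placeholder)
    vi: float     # intensity value Vi of its intensity indicator pf

def det_pos(observer, frame):
    """DET_POS: observer position in the frame RP or RI (placeholder)."""
    return observer[frame]

def calc_clum(clux):
    """CALC_CLUM: derive luminance points pl from the intensity map CLUX."""
    return [LuminancePoint(pos=p, vi=vi) for p, vi in clux]

def def_plr(pos_l2, pos_o2):
    """DEF_PLR: coordinates (ply, plz) of the projection plr of pl on the
    image plane P1 toward the observer; reduced here to a coordinate copy."""
    return pos_l2[0], pos_l2[1]

def def_pix(ply, plz, w2, h2):
    """DEF_PIX: pixel indices, per the formulas of the previous section."""
    return int(-plz * h2), int(ply * w2)

def run_pipeline(clux, observer, image, sigma=1.0):
    """Mirror of the PR chain; MOD_PF applies the first correction
    variant Vi = sigma * Vi0 * Co / 255."""
    h2, w2 = len(image), len(image[0])
    pos_o2 = det_pos(observer, "RI")                 # DET_POS(O, PosO2, RI)
    for pl in calc_clum(clux):                       # CALC_CLUM(CLUX, S, pl)
        pos_l2 = pl.pos           # CALC_POS(pl, PosL2, O, RI), identity here
        ply, plz = def_plr(pos_l2, pos_o2)           # DEF_PLR(plr, P1, ...)
        if 0.0 <= ply < 1.0 and -1.0 < plz <= 0.0:   # plr belongs to Ip?
            lig, col = def_pix(ply, plz, w2, h2)     # DEF_PIX(pl(lig, col))
            pl.vi = sigma * pl.vi * image[lig][col] / 255.0   # MOD_PF
    # PROJ(ML, Fx, Vi, pf): the module then emits Fx with the corrected Vi
```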
Thus, the motor vehicle V comprises one or more lighting devices DISP adapted to implement the MTH projection method described.

Thus, the patent application PCT/EP2016/071596 has in particular the following advantages:
- it projects an image comprising at least one graphic symbol, which makes it possible to improve the comfort and/or safety of an observer who is inside or outside the motor vehicle;
- it makes it possible to project an image that is visible and understandable by a determined observer, because said projection depends on the position of said observer; the same projection method is thus applied to project an image understandable by the driver, by a pedestrian, or by the driver of a following vehicle, for example;
- it makes it possible to distort the image to be projected Ip so that it can be understood by a determined observer; an anamorphosis of the image is thus created, said anamorphosis depending on the observation position of the observer O;
- the observation position of the observer in the image reference frame is a function of the position and of the rotation of said image to be projected; thanks to the rotation, which depends in particular on a site angle, when the latter is adjusted in a particular way the observer has the impression of seeing an image in 3D;
- it makes it possible to integrate the information to be projected into the lighting beam Fx of the light module ML of the motor vehicle; no dedicated additional light source is necessary;
- unlike a prior art which displays an image directly on the rear light window of the motor vehicle, and which may therefore appear too small at a certain distance, it allows an outside observer located at a certain distance from said motor vehicle to see the image well, since the latter is projected as a function of the position of the observer and onto a projection surface which is not the window of a motor vehicle light; the dimensions of the image to be projected Ip are thus no longer limited to a small projection surface such as the light window;
- it proposes a solution usable by a recipient of information who can only see the front or the sides of the motor vehicle, for example, unlike a solution which displays an image on the rear lights of the motor vehicle;
- it proposes an alternative to displaying images on the rear lights of the motor vehicle;
- it proposes an alternative to an image projection dedicated solely to the driver of the motor vehicle.
Claims:
1. Method for projecting at least one image by a projection system (2) of a motor vehicle comprising a detection device (4) capable of detecting a disturbance zone (38) and of generating an alert signal, a processing unit (10) capable of generating a control signal, an imaging device (18) suitable for receiving the control signal and projecting a digital image, and a storage unit (6) storing at least one image representative of a pictogram, characterized in that the method comprises the following steps:
a) detection (30) of a disturbance zone (38),
b) emission (42) of an alert signal, and on reception of the alert signal:
c) generation of a control signal, said generation step comprising the following substeps:
• determination (44) of the position of the driver in a predefined reference frame called the projection reference frame (Rp),
• calculation (46) of a transformation matrix (M) of an image as a function of the determined driver position,
• generation (54) of the control signal, the control signal being obtained by applying said transformation matrix (M) to at least one stored image,
d) control (56) of the imaging device (18) from the control signal to project a transformed image, the pictogram appearing in a vertical plane (PV) for said driver in the transformed image, and
e) projection (58) of the transformed image.

2. Projection method according to claim 1, in which the transformation matrix (M) is shaped to rotate the stored image by an angle of 90° relative to a horizontal axis (A) extending on the roadway, said horizontal axis (A) being perpendicular to the direction of movement of the vehicle.

3. Projection method according to any one of claims 1 and 2, in which the step of generating a control signal further comprises a step of adding (52) at least one shadow zone on said transformed image, so that the pictogram represented on the transformed image is perceived in relief by said driver.

4. Projection method according to any one of claims 1 to 3, in which the detection device (4) comprises a receiver (3) suitable for receiving an electromagnetic signal and a system (5) for geographic location of the motor vehicle, and in which the step of detecting (30) the disturbance zone (38) comprises the following steps:
- determination (32) of the geographical position of the motor vehicle;
- reception (34) of an electromagnetic signal capable of indicating at least one disturbance zone (38) on the road;
- determination (36) of the possibility of passage of the motor vehicle through the disturbance zone;
- development (40) of an alert signal when the motor vehicle is going to cross the disturbance zone (38).

5. Projection method according to claim 4, in which the electromagnetic signal is chosen from among a radio signal, a signal from a wireless telecommunications network and a signal from a computer network governed by the communication protocol defined by the standards of the IEEE 802.11 group.

6. Projection method according to claim 4, in which the electromagnetic signal is a light signal, for example a signal having a wavelength between 400 and 800 nanometers.
7. Projection method according to any one of claims 1 to 3, in which the projection system (2) comprises a camera, and in which the detection step (30) comprises the following steps:
- acquisition (60) of at least one image representative of the roadway by said camera;
- processing (62) of said at least one acquired image to detect the existence of a disturbance zone;
- development (64) of an alert signal.

8. Projection method according to any one of claims 1 to 7, in which the pictogram is an image representative of an element among a road block, a road sign, guide lines or arrows, and site stakes.

9. Projection method according to any one of claims 1 to 8, which further comprises a step of capturing an image of the driver of the car, and in which the step of determining the position of the driver in a predefined reference frame called the projection reference frame (Rp) is implemented from the captured image.

10. Projection system (2) for projecting at least one image of a motor vehicle, said projection system (2) comprising:
- a storage unit (6) capable of storing at least one image representing a pictogram;
- a detection device (4) capable of detecting a disturbance zone (38), said detection device (4) being adapted to generate an alert signal on detection of the disturbance zone (38);
- a processing unit (10) connected to the imager (8) and to the detection device (4), the processing unit (10) being able to calculate a transformation matrix (M) as a function of the position of the driver and to generate a control signal from the transformation matrix (M) and the stored image; and
- an imaging device (18) capable of projecting at least one transformed image from a received control signal, the transformed image being intended to appear in a vertical plane for said driver of the motor vehicle.

11. Projection system (2) according to claim 10, comprising a light source (16) capable of emitting a light beam towards the imaging device (18), and a projection device (10) capable of projecting the transformed image onto the floor.

12. Projection system (2) according to any one of claims 10 and 11, comprising an imager capable of capturing the driver, and in which the processing unit (10) is capable of finding the position of the driver on the captured image and of defining the transformation matrix (M) from the determined driver position.
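For illustration only (this sketch is editorial, not part of the claims): the 90° rotation about a horizontal axis A lying on the roadway and perpendicular to the direction of travel, described in claim 2 above, can be written as a standard rotation matrix. The axis layout (x along the direction of travel, y along A, z vertical) and the sign convention are assumptions:

```python
import math

def rotation_about_lateral_axis(theta_rad):
    """Rotation matrix about the horizontal axis A (here the y axis),
    perpendicular to the direction of travel x; z is vertical."""
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

# A 90-degree rotation maps a point lying ahead on the roadway (x, y, 0)
# into the vertical plane through the axis; the sign of the angle (and
# hence whether the image stands above or below the axis) is a convention.
M = rotation_about_lateral_axis(math.pi / 2)
p = (2.0, 0.5, 0.0)
q = tuple(sum(M[i][j] * p[j] for j in range(3)) for i in range(3))
print(q)  # approximately (0.0, 0.5, -2.0)
```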
Patent family:

Publication number | Publication date
FR3056490B1 | 2018-10-12
EP3306592A1 | 2018-04-11
US20180086262A1 | 2018-03-29
CN107878300A | 2018-04-06
US10696223B2 | 2020-06-30
EP3306592B1 | 2019-03-06
Cited documents:

Publication number | Filing date | Publication date | Applicant | Title
WO2000035200A1 | 1998-12-07 | 2000-06-15 | Universal City Studios, Inc. | Image correction method to compensate for point of view image distortion
US20140049384A1 | 2012-08-16 | 2014-02-20 | GM Global Technology Operations LLC | Method for warning a following motor vehicle
EP2896937A1 | 2014-01-21 | 2015-07-22 | Harman International Industries, Incorporated | Roadway projection system
US6977630B1 | 2000-07-18 | 2005-12-20 | University of Minnesota | Mobility assist device
EP2307855A1 | 2008-07-31 | 2011-04-13 | Tele Atlas B.V. | Computer arrangement and method for displaying navigation data in 3D
EP2740632A1 | 2012-12-07 | 2014-06-11 | Urs Nüssli | Lateral rearview mirror system for a vehicle, method for projecting an image to the environment of the vehicle and corresponding application program product
TWI531495B | 2012-12-11 | 2016-05-01 | | Automatic Calibration Method and System for Vehicle Display System
FR3000570B1 | 2012-12-28 | 2016-04-29 | Valeo Etudes Electroniques | Display for displaying in the field of vision of a vehicle driver a virtual image and image generating device for said display
JP6273976B2 | 2014-03-31 | 2018-02-07 | Denso Corporation | Display control device for vehicle
GB2525655B | 2014-05-01 | 2018-04-25 | Jaguar Land Rover Ltd | Dynamic lighting apparatus and method
FR3026689A1 | 2014-10-02 | 2016-04-08 | Valeo Vision | A pictogram display signaling device for a motor vehicle, and a signaling light provided with such a luminous device
US9632664B2 | 2015-03-08 | 2017-04-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
SE539221C2 | 2015-06-04 | 2017-05-23 | Scania CV AB | Method and control unit for avoiding an accident at a crosswalk
SE539097C2 | 2015-08-20 | 2017-04-11 | Scania CV AB | Method, control unit and system for avoiding collision with vulnerable road users
US10053001B1 | 2015-09-24 | 2018-08-21 | Apple Inc. | System and method for visual communication of an operational status
US10640117B2 | 2016-08-17 | 2020-05-05 | Allstate Insurance Company | Driving cues and coaching
CN108349429B | 2015-10-27 | 2021-03-23 | Koito Manufacturing Co., Ltd. | Lighting device for vehicle, vehicle system and vehicle
US10261515B2 | 2017-01-24 | 2019-04-16 | Wipro Limited | System and method for controlling navigation of a vehicle
JP6554131B2 | 2017-03-15 | 2019-07-31 | Subaru Corporation | Vehicle display system and method for controlling vehicle display system
Legal status:

2017-09-29 | PLFP | Fee payment | Year of fee payment: 2
2018-03-30 | PLSC | Publication of the preliminary search report | Effective date: 2018-03-30
2018-09-28 | PLFP | Fee payment | Year of fee payment: 3
2019-09-30 | PLFP | Fee payment | Year of fee payment: 4
2020-09-30 | PLFP | Fee payment | Year of fee payment: 5
2021-09-30 | PLFP | Fee payment | Year of fee payment: 6